
    The Affordances of Social Annotation in the Online Writing Classroom: A Community of Inquiry Analysis

    Social annotation (SA) is a genre of web-based applications that allow users to annotate texts and to see and respond to annotations others have written. To explore the potential of SA for teaching writing online, findings from twelve empirical studies of SA in education were analyzed through the lens of the Community of Inquiry framework. Results indicate that SA can contribute to cognitive presence, social presence, and teaching presence in online writing instruction. SA supports collaborative understanding and unpacking of texts: students can identify and discuss main ideas, claims, rhetorical moves, and evidence, and can ask and answer questions. The demands of articulating their views and comparing them with those of others make students aware of their own thought processes, a metacognitive development. Pedagogical strategies such as highlighting important concepts and seeding expert annotations can focus attention and scaffold learning. SA can also promote the development of community as students collaborate and encourage one another, and teachers can plan and monitor SA activities aligned with learning outcomes. The findings provide insights to guide the incorporation of SA in online writing instruction.

    Two Steps Forward, One Step Back: A Computer-aided Error Analysis of Grammar Errors in EAP Writing

    This study consists of a computer-aided error analysis of grammar errors in 70 university placement essays, whose scores resulted in students being placed in EAP (English for Academic Purposes) Level 1, placed in EAP Level 2, or exempted from the EAP program. Essays were scored prior to the study using the department's process, whereby each essay was scored by at least two raters using an analytic rubric. An error taxonomy of 16 categories based on Lane and Lange (1999) was used to code the essay data. The data were assembled into a corpus and tagged using the text analysis program UAM (Universidad Autónoma de Madrid) CorpusTool, and the results were exported and analyzed with statistical tests. The results of the study validate the EAP placement process: scores in the language use section of the rubric were highly correlated with total scores, and inter-rater reliability was also established. Error rates likewise correlated with language use scores, suggesting that raters were responding to grammatical errors in making their assessments. Comparisons between the three placement groups revealed significant differences in error rates between Level 2 and Exempt. Based on the correlations, between-group comparisons, and overall frequency of errors, six error categories were chosen for closer analysis: sentence structure, articles, prepositions, singular/plural, subordinate clauses, and other. The findings suggest that local errors, though often given low priority in textbooks, significantly impact rater assessment. Results also suggest that error rates do not necessarily decrease with advancing level; some error rates may increase. Though surprising, this finding may be attributable in part to the fact that some errors are evidence of interlanguage development as new forms are acquired. The study concludes with suggestions for teaching and future research.
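
    A minimal sketch in Python of the kind of analysis the abstract describes, assuming hypothetical per-essay data: the placement groups, scores, and error rates below are illustrative stand-ins, not figures from the study. It correlates per-essay error rates with language use scores and compares error rates across the three placement groups.

    import numpy as np
    from scipy import stats

    # Hypothetical records: (placement group, language use score, errors per 100 words)
    essays = [
        ("Level 1", 2.0, 9.1), ("Level 1", 2.5, 8.4), ("Level 1", 2.0, 10.2),
        ("Level 2", 3.0, 6.7), ("Level 2", 3.5, 7.9), ("Level 2", 3.0, 6.1),
        ("Exempt",  4.5, 3.2), ("Exempt",  5.0, 2.8), ("Exempt",  4.5, 4.0),
    ]

    scores = np.array([e[1] for e in essays])
    rates = np.array([e[2] for e in essays])

    # Correlation between language use score and error rate
    r, p = stats.pearsonr(scores, rates)
    print(f"language use score vs. error rate: r = {r:.2f}, p = {p:.3f}")

    # Between-group comparison of error rates (one-way ANOVA)
    groups = {}
    for group, _, rate in essays:
        groups.setdefault(group, []).append(rate)
    f, p = stats.f_oneway(*groups.values())
    print(f"between-group comparison: F = {f:.2f}, p = {p:.3f}")

    With data like these, a negative correlation and a significant F statistic would mirror the pattern the abstract reports, though the study's own coding and tagging were done in UAM CorpusTool before statistical analysis.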